Data Distribution Analysis and Optimization for Pointer-Based Distributed Programs
Authors
Abstract
High Performance Fortran (HPF) provides distributed arrays to efficiently support a global name space on distributed memory architectures. The distributed data structures supported by HPF, however, are limited to array constructs and do not extend to pointer-based distributed structures. With the support of distributed pointers and class abstractions in parallel C++-like languages, application programmers can now build their own pointer-based aggregate objects to provide a global name space on distributed environments. A critical question, however, remains: can the compiler understand the distribution pattern of pointer-based distributed objects built by application programmers, and perform optimization as effectively as the HPF compiler does with distributed arrays? In this paper, we address this challenging issue. In our work, we first present a parallel programming model which allows application programmers to build pointer-based distributed objects at the application level. Next, we propose a distribution analysis algorithm which can automatically summarize the distribution pattern of pointer-based distributed objects built by application programmers. To the best of our knowledge, this is the first work to attempt to address this open issue. Our distribution analysis framework employs Feautrier's parametric integer programming as the basic solver, and can always obtain precise distribution information for the class of programs written in our parallel programming model with static control. Experimental results on a 16-node IBM SP-2 machine show that the compiler, with the help of the distribution analysis algorithm, can significantly improve the performance of pointer-based distributed programs.
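As a rough illustration of the kind of pointer-based distributed object the abstract refers to, the sketch below models a "global pointer" as an (owner, local slot) pair and links per-processor nodes into a cyclically distributed list. The names (GlobalPtr, Node, build_cyclic_list) and the single-address-space simulation are assumptions made for illustration only; they are not the paper's actual programming model or API.

    // Hypothetical sketch (not the paper's actual API): a "global pointer" is
    // modeled as an (owner, local slot) pair, and a pointer-based distributed
    // list is built by linking per-processor nodes in a cyclic order.
    #include <cstdio>
    #include <vector>

    constexpr int kProcs = 4;          // number of (simulated) processors

    // A global pointer names an element by its owning processor and local slot.
    struct GlobalPtr {
        int owner;                     // which processor holds the element
        int slot;                      // index into that processor's local heap
        bool null() const { return owner < 0; }
    };

    // One element of a pointer-based distributed list.
    struct Node {
        double value;
        GlobalPtr next;                // cross-processor link
    };

    // Per-processor local heaps (simulated here in one address space).
    std::vector<Node> heap[kProcs];

    // Build a list of n elements, placing consecutive elements cyclically
    // across processors. Slots (not raw pointers) are stored in links so that
    // vector reallocation cannot invalidate them.
    GlobalPtr build_cyclic_list(int n) {
        GlobalPtr head{-1, -1}, prev{-1, -1};
        for (int i = 0; i < n; ++i) {
            int owner = i % kProcs;                        // cyclic placement
            heap[owner].push_back({double(i), {-1, -1}});
            GlobalPtr cur{owner, int(heap[owner].size()) - 1};
            if (prev.null()) head = cur;
            else heap[prev.owner][prev.slot].next = cur;   // link previous node
            prev = cur;
        }
        return head;
    }

    int main() {
        GlobalPtr p = build_cyclic_list(8);
        // Traverse the list, printing which processor owns each element.
        while (!p.null()) {
            std::printf("value %.0f on proc %d\n",
                        heap[p.owner][p.slot].value, p.owner);
            p = heap[p.owner][p.slot].next;
        }
        return 0;
    }

A distribution analysis in the spirit described by the abstract would try to infer from the construction loop a closed-form placement such as "element i resides on processor i mod kProcs", which is the kind of summary that enables HPF-style communication optimization for pointer-based structures.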
Similar resources
Data Distribution Analysis and Optimization for Pointer-Based Distributed Programs - Proceedings of the 1997 International Conference on Parallel Processing
A critical question remains open: can the compiler understand the distribution pattern of pointer-based distributed objects built by application programmers, and perform optimization as effectively as the HPF compiler does with distributed arrays? In this paper, we address this challenging issue. In our work, we first present a parallel programming model which allows application programm...
SPMD Execution of Programs with Pointer-Based Dynamic Data Structures
This paper discusses an approach for supporting SPMD (single-program, multiple-data) execution of programs with pointer-based data structures on distributed-memory machines. Through a combination of language design and new compilation techniques, static and dynamic implicit parallelism present in sequential programs based upon pointer-based data structures is exploited. Language support is prov...
Exploiting Locality and Parallelism in Pointer-Based Programs
While powerful optimization techniques are currently available for limited automatic compilation domains, such as dense array-based scientific and engineering numerical codes, a similar level of success has eluded general-purpose programs, especially symbolic and pointer-based codes. Current compilers are not able to successfully deal with parallelism in those codes. Based on our previously deve...
A harmony search-based approach for real-time volt & var control in distribution network by considering distributed generations units
In the recent decade, the development of telecommunications infrastructure has led to rapid exchange of data between distribution network components and the control center in many developed countries. These changes, considering the numerous benefits of Distributed Generators (DGs), have given distribution companies more motivation than ever before to utilize these kinds of generators. ...
LLVM Optimizations for PGAS Programs. Case Study: LLVM Wide Pointer Optimizations in Chapel
PGAS programming languages such as Chapel, Coarray Fortran, Habanero-C, UPC and X10 [3–6, 8] support high-level and highly productive programming models for large-scale parallelism. Unlike message-passing models such as MPI, which introduce nontrivial complexity due to message-passing semantics, PGAS languages simplify distributed parallel programming by introducing higher-level parallel languag...
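For readers unfamiliar with the wide-pointer representation this entry refers to, the following sketch shows the general idea under assumed names (WidePtr, load_wide, load_narrow): a wide pointer pairs a locale id with an address, and narrowing it to a plain pointer, once the compiler proves locality, removes the per-access ownership check. This is only an illustration of the concept, not Chapel's or LLVM's actual representation.

    // Hypothetical sketch of the PGAS "wide pointer" idea; names and layout
    // are illustrative assumptions, not Chapel's or LLVM's actual IR.
    #include <cstdio>

    struct WidePtr {
        int locale;        // which node owns the data
        double* addr;      // address valid on that node
    };

    int this_locale = 0;   // the node this code is (pretend to be) running on

    // Unoptimized access: every dereference checks ownership because the
    // target could live on a remote locale (a real runtime would issue a
    // remote get on the slow path).
    double load_wide(const WidePtr& p) {
        if (p.locale == this_locale) return *p.addr;   // local fast path
        /* remote get would go here */ return 0.0;
    }

    // After a locality proof, the locale field can be stripped entirely and
    // an ordinary load emitted: this is the effect of wide-pointer narrowing.
    double load_narrow(const double* p) { return *p; }

    int main() {
        double x = 42.0;
        WidePtr wp{this_locale, &x};
        std::printf("wide: %g  narrow: %g\n", load_wide(wp), load_narrow(&x));
        return 0;
    }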
Journal title:
Volume/Issue:
Pages: -
Publication year: 1997